24 research outputs found

    Traceability for Mutation Analysis in Model Transformation

    Model transformations cannot be tested directly with program testing techniques; those techniques have to be adapted to the characteristics of models. In this paper we focus on one test technique: mutation analysis. This technique aims to qualify a test data set by analyzing the execution results of intentionally faulty program versions. If the degree of qualification is not satisfactory, the test data set has to be improved. In the context of models, this improvement step is currently tedious and performed manually. We propose an approach based on traceability mechanisms to ease the improvement of the test model set during mutation analysis. Using a benchmark, we illustrate the quick, automatic identification of the input model to change; a new model is then created to raise the quality of the test data set.
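
    The mutation-analysis loop described in this abstract reduces to a simple scoring procedure. Below is a minimal sketch of that loop, assuming transformations can be run as functions over test models; all names (mutation_score, the toy transformations) are hypothetical, and the traceability step that maps a surviving mutant back to the input model to change is left out.

```python
def mutation_score(original, mutants, test_models):
    """Return the fraction of mutants killed by the test model set,
    plus the surviving mutants that signal the set needs improving."""
    survivors = []
    for mutant in mutants:
        # A mutant is killed if at least one test model makes its output
        # differ from the original transformation's output.
        killed = any(mutant(m) != original(m) for m in test_models)
        if not killed:
            survivors.append(mutant)
    score = (len(mutants) - len(survivors)) / len(mutants)
    return score, survivors

# Toy stand-ins for transformations over integer "models":
original = lambda m: m * 2
mutants = [lambda m: m * 2 + 1,  # killed by any test model
           lambda m: m + m]      # behaves like the original: survives
score, survivors = mutation_score(original, mutants, test_models=[1, 2, 3])
print(f"mutation score: {score:.0%}, {len(survivors)} surviving mutant(s)")
```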

    Security Assessment for Application Network Services Using Fault Injection

    No full text

    A New Approach for Software Testability

    No full text
    In this paper, we propose another point of view on testability. Instead of assessing testability through metrics, the idea is to identify very good practices in testing and to check that they are actually implemented in the model or in the final code.
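
    As an illustration of checking a practice rather than computing a metric, here is a hedged sketch; the chosen rule (every module under src/ has a companion test file under tests/) and the directory layout are assumptions for the example, not practices taken from the paper.

```python
from pathlib import Path

def check_test_pairing(src_dir="src", test_dir="tests"):
    """Report src modules that violate the 'has a companion test' practice."""
    missing = []
    for module in Path(src_dir).rglob("*.py"):
        expected = Path(test_dir) / f"test_{module.stem}.py"
        if not expected.exists():
            missing.append((module, expected))
    return missing

for module, expected in check_test_pairing():
    print(f"practice violated: {module} has no companion test {expected}")
```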

    Cooperative Component Testing Architecture in Collaborating Network Environment

    No full text

    Testability, fault size and the domain-to-range ratio

    No full text

    Mutation Analysis Testing for Model Transformations

    No full text
    Abstract. In MDE, model transformations should be efficiently tested so that they may be used and reused safely. Mutation analysis is an efficient technique to evaluate the quality of test data, and has been extensively studied for both procedural and object-oriented languages. In this paper, we study how it can be adapted to model-oriented programming. Since no model transformation language has been widely accepted today, we propose generic fault models based on the abstract operations that constitute this process: model navigation, filtering of model elements, output model creation, and input model modification. We then propose a set of specific mutation operators directly inspired by these operations. We believe these operators are meaningful, since a large part of the errors in a transformation are due to the manipulation of complex models, regardless of the concrete implementation language.
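
    To make the operator idea concrete, here is a minimal sketch of one filtering-style mutation, assuming a transformation whose element filter can be swapped out; the names (transform, mutate_filter_deletion, the toy class model) are illustrative and do not come from the paper's fault models.

```python
def transform(model, keep=lambda cls: cls["abstract"]):
    """Navigate the model's classes, filter them, create output elements."""
    return [f"interface I{cls['name']}" for cls in model["classes"] if keep(cls)]

def mutate_filter_deletion(transformation):
    """Mutation operator: delete the element filter, keeping everything."""
    return lambda model: transformation(model, keep=lambda cls: True)

model = {"classes": [{"name": "Shape", "abstract": True},
                     {"name": "Circle", "abstract": False}]}
print(transform(model))                          # ['interface IShape']
print(mutate_filter_deletion(transform)(model))  # two elements: mutant is killable
```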

    Defining and measuring policy coverage in testing access control policies

    No full text
    Abstract. To facilitate managing access control in a system, security officers increasingly write access control policies in specification languages such as XACML and use a dedicated software component called a Policy Decision Point (PDP). To increase confidence in written policies, certain types of policy testing, often ad hoc, are usually conducted: the PDP is probed with typical requests and its responses are checked against expected ones. This paper develops a first step toward systematic policy testing by defining and measuring policy coverage. We have developed a coverage-measurement tool that measures policy coverage given a set of XACML policies and a set of requests, a request-generation tool that randomly generates requests for a given set of policies, and a request-reduction tool that greedily selects a nearly minimal set of requests achieving the same coverage as the originally generated requests. To evaluate coverage-based request reduction and its effect on fault detection, we conducted an experiment with mutation testing on a set of real policies. Our results show that coverage-based test reduction can substantially reduce the number of generated requests while incurring only a relatively small loss in fault detection. We also conducted a study of the policy coverage achieved by manually generated requests.
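
    The request-reduction step reads as greedy set cover: repeatedly pick the request that covers the most not-yet-covered policy entities until overall coverage matches that of the full request set. A minimal sketch under that reading follows; the request and rule identifiers are made up.

```python
def reduce_requests(coverage):
    """coverage: dict mapping request id -> set of covered policy rule ids.
    Greedily select requests until they cover what the full set covers."""
    target = set().union(*coverage.values())
    covered, selected = set(), []
    while covered != target:
        best = max(coverage, key=lambda r: len(coverage[r] - covered))
        selected.append(best)
        covered |= coverage[best]
    return selected

requests = {"req1": {"rule1", "rule2"},
            "req2": {"rule2"},
            "req3": {"rule3"},
            "req4": {"rule1", "rule3"}}
print(reduce_requests(requests))  # ['req1', 'req3']: same coverage, half the requests
```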